Reducing Training Time of Deep Learning Based Digital Backpropagation by Stacking

Authors

Abstract

A method for reducing the training time of deep learning based digital backpropagation (DL-DBP) is presented. The method relies on dividing the link into smaller sections. The first section is then compensated by the DL-DBP algorithm, and the same trained model is reapplied to the subsequent sections. We show in a 32 GBd 16QAM 2400 km 5-channel wavelength division multiplexing transmission experiment that the proposed stacked DL-DBP approach provides a 0.41 dB gain with respect to a linear compensation scheme. This should be compared with the 0.56 dB gain achieved by the non-stacked scheme, at the price of a 203% increase in total training time. Furthermore, it is shown that by retraining only the last DL-DBP section, one can reach a performance of 0.48 dB.
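The abstract describes the stacking procedure only at a high level. The sketch below illustrates the idea in PyTorch under stated assumptions: the `DBPSection` module, the `train_section` routine, the tap count, and the placeholder waveforms are all hypothetical stand-ins, not the letter's actual DL-DBP parameterization or data.

```python
# Minimal sketch of stacked DL-DBP, assuming a hypothetical DBPSection
# module: a generic learnable filter plus a Kerr-like phase rotation as a
# stand-in for the letter's actual linear/nonlinear DL-DBP steps.
import copy
import torch
import torch.nn as nn

class DBPSection(nn.Module):
    """Hypothetical learnable compensator for one link section."""
    def __init__(self, taps: int = 65):
        super().__init__()
        # Complex baseband signal represented as 2 channels (I and Q).
        self.filt = nn.Conv1d(2, 2, kernel_size=taps, padding=taps // 2)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned nonlinear weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.filt(x)                            # linear (dispersion-like) step
        power = (y ** 2).sum(dim=1, keepdim=True)   # instantaneous power
        phi = self.gamma * power                    # power-dependent phase shift
        i, q = y[:, :1], y[:, 1:]
        return torch.cat([i * torch.cos(phi) - q * torch.sin(phi),
                          i * torch.sin(phi) + q * torch.cos(phi)], dim=1)

def train_section(model, rx, tx, epochs=100, lr=1e-3):
    """Fit one section so that model(rx) approximates the target tx."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(rx), tx)
        loss.backward()
        opt.step()
    return model

num_sections = 6                # illustrative section count
rx = torch.randn(1, 2, 4096)    # placeholder received waveform (I/Q)
tx = torch.randn(1, 2, 4096)    # placeholder target after one section

# Stacking: train once on the first section, then reuse copies of the
# trained model for the remaining sections instead of retraining each.
first = train_section(DBPSection(), rx, tx)
stack = nn.Sequential(*[copy.deepcopy(first) for _ in range(num_sections)])

# Optional refinement reported in the letter: retrain only the last stage,
# keeping the earlier (stacked) copies frozen.
for p in stack[:-1].parameters():
    p.requires_grad = False
with torch.no_grad():
    mid = stack[:-1](rx)        # output of the frozen stages
train_section(stack[-1], mid, tx)
```

Stacking trains one section's parameters instead of one set per section, which is where the reported training-time saving comes from; retraining just the last copy recovers part of the gap to the fully trained, non-stacked scheme (0.48 dB versus 0.56 dB, per the abstract).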


Similar Resources

Boosted Backpropagation Learning for Training Deep Modular Networks

Divide-and-conquer is key to building sophisticated learning machines: hard problems are solved by composing a network of modules that solve simpler problems [13, 16, 4]. Many such existing systems rely on learning algorithms which are based on simple parametric gradient descent where the parametrization must be predetermined, or more specialized per-application algorithms which are usually ad-...


Training Deep Networks with Structured Layers by Matrix Backpropagation

Deep neural network architectures have recently produced excellent results in a variety of areas in artificial intelligence and visual recognition, well surpassing traditional shallow architectures trained using hand-designed features. The power of deep networks stems both from their ability to perform local computations followed by pointwise non-linearities over increasingly larger receptive f...


Parallel Training of Deep Stacking Networks

The Deep Stacking Network (DSN) is a special type of deep architecture developed to enable and benefit from parallel learning of its model parameters on large CPU clusters. As a prospective key component of future speech recognizers, the architectural design of the DSN and its parallel training endow the DSN with scalability over a vast amount of training data. In this paper, we present our fir...


Training Deep Spiking Neural Networks Using Backpropagation

Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signa...




Journal

Journal title: IEEE Photonics Technology Letters

Year: 2022

ISSN: 1941-0174, 1041-1135

DOI: https://doi.org/10.1109/lpt.2022.3162157